Moments in Time Dataset: one million videos for event understanding
Authors
Abstract
We present the Moments in Time Dataset, a large-scale human-annotated collection of one million short videos corresponding to dynamic events unfolding within three seconds. Modeling the spatial-audio-temporal dynamics even of actions occurring in 3-second videos poses many challenges: meaningful events include not only people, but also objects, animals, and natural phenomena; and visual and auditory events can be symmetrical or asymmetrical in time ("opening" means "closing" in reverse order), and transient or sustained. We describe the annotation process of our dataset (each video is tagged with one action or activity label among 339 different classes), analyze its scale and diversity in comparison to other large-scale video datasets for action recognition, and report results of several baseline models addressing the three modalities, spatial, temporal, and auditory, both separately and jointly. The Moments in Time dataset, designed to have large coverage and diversity of events in both visual and auditory modalities, can serve as a new challenge to develop models that scale to the level of complexity and abstract reasoning that a human processes on a daily basis.
Similar resources
Videos from the 2013 Boston Marathon: An Event Reconstruction Dataset for Synchronization and Localization
Event reconstruction is about reconstructing the event truth from a large amount of videos which capture different moments of the same event at different positions from different perspectives. Up to now, there are no related public datasets. In this paper, we introduce the first real-world event reconstruction dataset to promote research in this field. We focus on synchronization and localizati...
Action Change Detection in Video Based on HOG
Background and Objectives: Action recognition, the process of labeling an unknown action in a query video, is a challenging problem due to event complexity, variations in imaging conditions, and intra- and inter-individual action variability. A number of solutions have been proposed to solve the action recognition problem. Many of these frameworks suppose that each video sequence includes only one ...
The Yli‐med Annotation Project Described Here Was Funded by a Grant from Cisco Systems, Inc., for Event Detection for Improved Speaker Diarization and Meeting Analysis. in Addition to Cisco, Icsi's Work on the Yli Corpus
The YLI Multimedia Event Detection corpus is a public-domain index of videos with annotations and computed features, specialized for research in multimedia event detection (MED), i.e., automatically identifying what's happening in a video by analyzing the audio and visual content. The videos indexed in the YLI-MED corpus are a subset of the larger YLI feature corpus, which is being developed by...
A First Look at User Switching Behaviors Over Multiple Video Content Providers
Watching videos from multiple content providers (CP) has become prevalent. For individual CPs, understanding user video consumption patterns among CPs is critical for improving on-site user experience and CP’s opportunity of success. In this paper, based on a two-month dataset recording 9 million users’ 269 million video viewing requests over 6 most popular video CPs in China, we provide a firs...
Video understanding using part based object detection models
We will explore the use of event-specific object detectors (part based models) in multimedia event detection. The challenge is to choose a well-trained object detector specific to the event videos. The presence of dataset bias would make an object detection model trained on an unrelated image database less useful for the video dataset in hand. Given such a model, we propose an iterative method t...
Journal: CoRR
Volume: abs/1801.03150
Pages: -
Publication date: 2017